23 research outputs found

    Intentional binding enhances hybrid BCI control

    Mental imagery-based brain-computer interfaces (BCIs) allow users to interact with the external environment while naturally bypassing the musculoskeletal system. Making BCIs efficient and accurate is paramount to improving the reliability of real-life and clinical applications, from open-loop device control to closed-loop neurorehabilitation. By promoting the sense of agency and embodiment, realistic setups that include multimodal channels of communication, such as eye gaze, and robotic prostheses aim to improve BCI performance. However, how the mental imagery command should be integrated into those hybrid systems to ensure the best interaction is still poorly understood. To address this question, we performed a hybrid EEG-based BCI experiment in which healthy volunteers took part in a reach-and-grasp action operated by a robotic arm. The main results showed that the timing of hand-grasping motor imagery significantly affects BCI accuracy as well as the spatiotemporal brain dynamics. Higher control accuracy was obtained when motor imagery was performed just after the robot's reaching movement, as compared to before or during it. The proximity to the subsequent robot grasping favored intentional binding, led to stronger motor-related brain activity, and primed the ability of sensorimotor areas to integrate information from regions implicated in higher-order cognitive functions. Taken together, these findings provide fresh evidence about the effects of intentional binding on human behavior and cortical network dynamics that can be exploited to design a new generation of efficient brain-machine interfaces. (18 pages, 5 figures, 7 supplementary materials)

    Comparison of strategies to elicit motor imagery-related brain patterns in multimodal BCI settings

    Cognitive tasks such as motor imagery (MI) used in brain-machine interfaces (BMIs) present many issues: they are demanding, often counter-intuitive, and difficult to explain to the subject during the instructions. Engaging feedback related to brain activity is key to keeping the subject involved in the task. We built a framework in which the subject controls a robotic arm through both gaze and brain activity in an enriched environment, using eye-tracking glasses and electroencephalography (EEG). In this study, we tackle the important question of the preferable moment to perform the MI task in the context of robotic arm control. To answer it, we designed a protocol in which subjects are placed in front of the robotic arm and choose with their gaze which object to seize. Then, based on stimuli blended into an augmented table, the subjects perform MI or resting-state tasks. The stimuli consist of a red (MI) or blue (resting) dot circling the object to seize. At the end of an MI task, the hand should close. Three strategies correspond to three different moments at which to perform the mental task: 1) after the robot's movement towards the object, 2) before the robot's movement, and 3) during the robot's movement. The experiment is split into a calibration phase and two control phases: in the calibration phase, the hand always closes during the MI task, whereas in the control phases, closing depends on the subject's brain activity. We rely on power spectral density estimates obtained with the Burg autoregressive method to differentiate between MI and resting state in the alpha and beta bands (8-35 Hz). Our method for comparing the strategies relies on classification performance (two-class LDA) using sensitivity, and on statistical differences between conditions (R-squared maps). Early results on the first 10 subjects show significant differences between strategies 1 and 2 in the offline classification analysis, and a trend in the real performance scores in favor of strategy 1.
    In all subjects, we observed brain activity localized in the motor cortex at a significant level with respect to resting state. This indicates that the framework, which places the subject at the center with a high sense of agency reinforced by gaze control, gives good results and makes it more certain that the subject is performing the right task. Taken together, our results indicate that the moment at which MI is performed is a relevant parameter of the framework. The strategy in which the robot is already at the object when MI is performed seems so far to be the best.
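    The spectral pipeline described in this abstract (Burg autoregressive PSD estimation, alpha/beta band powers, two-class LDA) can be sketched on synthetic data as follows. This is a minimal illustration, not the study's code: the sampling rate, model order, trial generator, and all function names are assumptions made for the example, and the toy trials simply attenuate a 10 Hz rhythm during MI to mimic event-related desynchronization.

    ```python
    import numpy as np

    def arburg(x, order):
        """AR coefficients and residual power via Burg's method."""
        x = np.asarray(x, dtype=float)
        f = x.copy()                       # forward prediction errors
        b = x.copy()                       # backward prediction errors
        a = np.array([1.0])                # AR polynomial, a[0] = 1
        E = np.dot(x, x) / x.size          # error power
        for m in range(order):
            fm, bm = f[m + 1:], b[m:-1]
            k = -2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
            a = np.concatenate((a, [0.0]))
            a = a + k * a[::-1]            # Levinson-style coefficient update
            new_f, new_b = fm + k * bm, bm + k * fm
            f[m + 1:], b[m + 1:] = new_f, new_b
            E *= 1.0 - k * k
        return a, E

    def ar_psd(a, E, freqs, fs):
        """PSD of the fitted AR model evaluated at `freqs` (Hz)."""
        z = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(a.size)))
        return E / (fs * np.abs(z @ a) ** 2)

    def band_features(sig, fs):
        """Log band power in the alpha (8-12 Hz) and beta (13-35 Hz) bands."""
        a, E = arburg(sig - sig.mean(), order=16)
        fgrid = np.arange(1.0, 40.0, 0.5)
        psd = ar_psd(a, E, fgrid, fs)
        alpha = psd[(fgrid >= 8) & (fgrid <= 12)].mean()
        beta = psd[(fgrid >= 13) & (fgrid <= 35)].mean()
        return np.log([alpha, beta])

    # Toy trials: MI attenuates the 10 Hz rhythm (ERD), rest does not.
    rng = np.random.default_rng(42)
    fs, n = 250, 500
    t = np.arange(n) / fs
    def trial(amp):
        return (amp * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                + 0.3 * rng.standard_normal(n))
    X0 = np.array([band_features(trial(1.5), fs) for _ in range(20)])  # rest
    X1 = np.array([band_features(trial(0.4), fs) for _ in range(20)])  # MI

    # Minimal two-class LDA with a shared covariance matrix.
    m0, m1 = X0.mean(0), X1.mean(0)
    S = np.cov(np.vstack((X0 - m0, X1 - m1)).T) + 1e-6 * np.eye(2)
    w = np.linalg.solve(S, m1 - m0)
    thr = w @ (m0 + m1) / 2.0
    pred = np.concatenate((X0, X1)) @ w > thr
    acc = np.mean(pred == np.r_[np.zeros(20, bool), np.ones(20, bool)])
    ```

    With this toy separation the training accuracy is essentially perfect; on real EEG, the same features feed the LDA after calibration trials.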

    Exploring strategies for multimodal BCIs in an enriched environment

    Brain-computer interfaces rely on cognitive tasks that look easy at first sight but turn out to be complex to perform. In this context, providing engaging feedback and fostering the subject's embodiment are key to overall system performance. However, noninvasive brain activity alone has often been shown to be insufficient to precisely control all the degrees of freedom of complex external devices such as a robotic arm. Here, we developed a hybrid BCI that also integrates eye-tracking technology to improve the subject's overall sense of agency. While this solution has been explored before, the best strategy for combining gaze and brain activity to obtain effective results has been poorly studied. To address this gap, we explore two strategies that differ in the timing of motor imagery; one strategy could be less intuitive than the other, which would result in differences in performance.

    From predictive modeling of pathological behavior to its application in robot-patient interaction

    No full text
    Problem statement and objectives: My thesis work deals with the development of robotic assistance solutions to help people with functional disorders. The question addressed here is the contribution of neural networks to modeling the movement of people affected by cerebellar syndrome. Results: With the aim of improving the control of an assistive robot (Monimad), my work consisted of developing solutions that provide an internal model of the movement of a person affected by cerebellar syndrome. To accomplish this, a data-recording phase was set up for both healthy people and people affected by cerebellar syndrome. The modeling took three forms. Trajectory generation: a simulation of real movement was built with neural networks and compared with other methods from the field of optimal control. Prediction: a prediction of the patient's future movements is obtained with neural networks; this prediction exhibits both generalization and specialization properties, and its behavior is validated on patients' movements. Observation: a reconstruction tool based on neural networks is proposed to reconstruct, from a single angular measurement, the state of a person during a standing-up movement. Finally, all these models are applied to the movements of people affected by cerebellar syndrome who use an assistive interface, and the modeling methods are validated on this protocol.
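    The prediction component described above can be illustrated with a one-step-ahead movement predictor. As a sketch only, a least-squares autoregressive model stands in for the thesis's neural networks; the toy trajectory, window length, and function names are invented for the example.

    ```python
    import numpy as np

    def fit_predictor(traj, k):
        """Least-squares one-step-ahead predictor from k past samples."""
        X = np.array([traj[i:i + k] for i in range(len(traj) - k)])
        y = traj[k:]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def predict(past_window, w):
        """Predict the next sample from the last k observed samples."""
        return past_window @ w

    # Toy joint-angle trajectory (rad) standing in for recorded movement data.
    t = np.linspace(0.0, 4.0 * np.pi, 400)
    angle = 0.8 * np.sin(t)

    k = 8
    w = fit_predictor(angle[:300], k)          # fit on the first 300 samples
    pred = predict(angle[292:300], w)          # predict sample index 300
    err = abs(pred - angle[300])
    ```

    Iterating `predict` on its own outputs yields multi-step forecasts, which is the kind of anticipation an assistive controller can exploit; a trained network would replace the linear map when the movement dynamics are nonlinear or pathological.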

    Computational modeling of physical patient-robot cooperation: from predicting pathological movement for assistive-robot control to the study of human-robot interaction

    No full text
    This work presents how computational modeling can be used at different levels of human-robot interaction. In particular, we show how anticipation can improve this interaction over different time horizons, and how modeling can address different cognitive levels of the interaction. Short-term anticipatory modeling improves the control loop by decoupling the nonlinearities, providing information about the state of the interaction that can be used to adapt the control parameters. As a predictor, this short-term modeling also makes it possible to judge or supervise the quality of execution of the modeled movement, whether healthy or pathological. When the anticipation is in the medium term, the modeling instead captures the subject's intention. Among other things, this describes the interaction as a sequence of objectives for the robot, with the model guessing the sequence of intentions. From the human's point of view, this collaboration with the machine feels natural and intuitive, as it accounts for changes of mind and errors. Finally, long-term anticipation provides commands at the level of the interaction itself. It makes it possible to set up dynamic role changes (leader/follower) between the user and the robot. These models have been used in kinesthetic direction-negotiation scenarios (going left or right). Interaction modeling has also led to studies on the evocative power of the action programmed into the robot. This work presents early and encouraging results on trust and agency during the interaction; trust, for example, has been shown to be transmissible through the kinesthetic channel. This work opens perspectives on how such models can be used to improve the intelligibility of robots during their interactions with humans.

    Implementation of haptic communication in comanipulative tasks: a statistical state machine model

    No full text
    This paper presents an experimental evaluation of physical human-human interaction in a lightweight condition, using a one-degree-of-freedom robotized setup. It explores possible origins of physical human-human communication, more precisely the hypothesis of time-based communication. To examine whether the communication is correlated with time, a statistical state machine model based on physical human-human interaction is proposed. The model is tested with 14 subjects and yields results that are close to human-human performance.
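    A statistical state machine with time-based transitions might be sketched as follows. This is only an illustration of the general idea: the state names, dwell-time statistics, and transition probabilities are hypothetical placeholders for values that would be estimated from recorded human-human trials, not the paper's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-state dwell-time statistics (mean, std in seconds)
    # and transition probabilities, standing in for estimates from data.
    STATES = ["idle", "initiate", "follow"]
    DWELL = {"idle": (0.6, 0.10), "initiate": (0.4, 0.05), "follow": (0.8, 0.20)}
    TRANS = {"idle": [0.0, 0.7, 0.3],      # P(next state | current state)
             "initiate": [0.5, 0.0, 0.5],
             "follow": [0.9, 0.1, 0.0]}

    def run_machine(n_steps):
        """Sample a sequence of (state, dwell time) pairs from the model."""
        s = "idle"
        log = []
        for _ in range(n_steps):
            mu, sd = DWELL[s]
            dwell = max(0.0, rng.normal(mu, sd))   # time spent in this state
            log.append((s, dwell))
            s = str(rng.choice(STATES, p=TRANS[s]))
        return log

    log = run_machine(12)
    ```

    Driving a robot partner from such sampled timings is one way to test whether reproducing human temporal statistics alone is enough to approach human-human performance.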

    From predictive modeling of pathological behavior to its application in robot-patient interaction

    No full text